
    Dealing With Misbehavior In Distributed Systems: A Game-Theoretic Approach

    Most distributed systems comprise autonomous entities interacting with each other to achieve their objectives. These entities behave selfishly when making decisions. This behavior may result in strategic manipulation of the protocols, thus jeopardizing the system-wide goals. Microeconomics and game theory provide suitable tools to model such interactions. We use game theory to model and study three specific problems in distributed systems. We study the problem of sharing the cost of multicast transmissions and develop mechanisms to prevent cheating in such settings. We study the problem of antisocial behavior in a scheduling mechanism based on the second-price sealed-bid auction. We also build models using extensive-form games to analyze the interactions of the attackers and the defender in a security game involving honeypots. Multicast cost sharing is an important problem, and very few distributed strategyproof mechanisms exist to calculate the cost shares of the users. These mechanisms are susceptible to manipulation by rational nodes. We propose a faithful mechanism which uses digital signatures and auditing to catch and punish the cheating nodes. Such a mechanism incurs some overhead. We deployed the proposed and existing mechanisms on PlanetLab to experimentally analyze the overhead and other relevant economic properties of the proposed and existing mechanisms. In a second-price sealed-bid auction, even though the bids are sealed, an agent can infer the private values of the winning bidders if the auction is repeated for related items. We study this problem from the perspective of a scheduling mechanism and develop an antisocial strategy which can be used by an agent to inflict losses on the other agents. In a security system, attackers and defender(s) interact with each other. Examples of such systems are honeynets, which are used to map the activities of the attackers to gain valuable insight about their behavior. The attackers want to evade the honeypots, while the defenders want them to attack the honeypots. These interactions form the basis of our research, where we develop a model used to analyze the interactions of an attacker and a honeynet system.
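The leakage the abstract describes in repeated second-price sealed-bid auctions can be sketched with a toy example. This is not the paper's mechanism, just an illustration of the standard auction rule (highest bidder wins, pays the second-highest bid) and of how a repeated auction reveals a losing agent's private value; the agent names and bid values are made up.

```python
# Second-price (Vickrey) sealed-bid auction: highest bidder wins,
# but pays the second-highest bid. Agents and bids are illustrative.

def second_price_auction(bids):
    """bids: dict mapping agent -> sealed bid. Returns (winner, price)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # the winner pays the second-highest bid
    return winner, price

# Round 1: agent "a" wins and pays b's bid; a's own bid stays hidden.
assert second_price_auction({"a": 10, "b": 7, "c": 4}) == ("a", 7)

# Round 2 (related item): "b" outbids "a" and pays a's bid of 10.
# The clearing price now reveals a's private value to the other agents.
assert second_price_auction({"a": 10, "b": 12, "c": 4}) == ("b", 10)
```

Since truthful agents bid their private values, whoever sets the clearing price has that value exposed; this is the information an antisocial agent can exploit across repeated, related auctions.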

    Characterizing and Optimizing End-to-End Systems for Private Inference

    Increasing privacy concerns have given rise to Private Inference (PI). In PI, both the client's personal data and the service provider's trained model are kept confidential. State-of-the-art PI protocols combine several cryptographic primitives: Homomorphic Encryption (HE), Secret Sharing (SS), Garbled Circuits (GC), and Oblivious Transfer (OT). Today, PI remains largely arcane and too slow for practical use, despite the need and recent performance improvements. This paper addresses PI's shortcomings with a detailed characterization of a standard high-performance protocol to build foundational knowledge and intuition in the systems community. The characterization pinpoints all sources of inefficiency -- compute, communication, and storage. A notable aspect of this work is the use of inference request arrival rates rather than studying individual inferences in isolation. Prior to this work, and without considering arrival rate, it has been assumed that PI pre-computations can be handled offline and their overheads ignored. We show this is not the case. The offline costs in PI are so high that they are often incurred online, as there is insufficient downtime to hide pre-compute latency. We further propose three optimizations to address the computation (layer-parallel HE), communication (wireless slot allocation), and storage (Client-Garbler) overheads, leveraging insights from our characterization. Compared to the state-of-the-art PI protocol, the optimizations provide a total PI speedup of 1.8×, with the ability to sustain inference requests up to a 2.24× greater rate.
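Of the primitives the abstract names, secret sharing (SS) is the simplest to illustrate. The sketch below shows textbook additive secret sharing over a prime field, not the paper's specific protocol; the prime, share count, and secret values are illustrative choices.

```python
import random

# Additive secret sharing over Z_p: a value is split into n random
# shares that sum to the secret mod P. Any n-1 shares look uniformly
# random, so no subset short of all n reveals anything.
P = 2**61 - 1  # a Mersenne prime; an illustrative field choice

def share(secret, n):
    """Split `secret` into n additive shares mod P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Recover the secret by summing all shares mod P."""
    return sum(shares) % P

assert reconstruct(share(123456, 3)) == 123456

# Addition is "free": parties add their shares locally, and the
# reconstructed sum equals the sum of the secrets -- one reason SS-based
# layers are cheap online while other primitives dominate pre-compute.
xs, ys = share(5, 2), share(7, 2)
assert reconstruct([(x + y) % P for x, y in zip(xs, ys)]) == 12
```

Multiplication under SS, by contrast, needs pre-computed correlated randomness (e.g. Beaver triples), which is one source of the offline cost the paper argues cannot always be hidden before the next request arrives.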

    Constraints on the χ_(c1) versus χ_(c2) polarizations in proton-proton collisions at √s = 8 TeV

    The polarizations of promptly produced χ_(c1) and χ_(c2) mesons are studied using data collected by the CMS experiment at the LHC, in proton-proton collisions at √s = 8 TeV. The χ_c states are reconstructed via their radiative decays χ_c → J/ψγ, with the photons being measured through conversions to e⁺e⁻, which allows the two states to be well resolved. The polarizations are measured in the helicity frame, through the analysis of the χ_(c2) to χ_(c1) yield ratio as a function of the polar or azimuthal angle of the positive muon emitted in the J/ψ → μ⁺μ⁻ decay, in three bins of J/ψ transverse momentum. While no differences are seen between the two states in terms of azimuthal decay angle distributions, they are observed to have significantly different polar anisotropies. The measurement favors a scenario where at least one of the two states is strongly polarized along the helicity quantization axis, in agreement with nonrelativistic quantum chromodynamics predictions. This is the first measurement of significantly polarized quarkonia produced at high transverse momentum.